

Search for: All records

Creators/Authors contains: "Nguyen, Tan"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Self-attention is key to the remarkable success of transformers in sequence modeling tasks, including many applications in natural language processing and computer vision. Like neural network layers, these attention mechanisms are often developed by heuristics and experience. To provide a principled framework for constructing attention layers in transformers, we show that self-attention corresponds to the support vector expansion derived from a support vector regression problem, whose primal formulation has the form of a neural network layer. Using our framework, we derive popular attention layers used in practice and propose two new attentions: 1) the Batch Normalized Attention (Attention-BN), derived from the batch normalization layer, and 2) the Attention with Scaled Head (Attention-SH), derived from using less training data to fit the SVR model. We empirically demonstrate the advantages of Attention-BN and Attention-SH in reducing head redundancy, increasing the model's accuracy, and improving the model's efficiency in a variety of practical applications including image and time-series classification.
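The abstract's central claim, that self-attention is a softmax-weighted expansion over value vectors (analogous to a support vector expansion), can be illustrated with a minimal sketch. The `attention_bn` variant below is a hypothetical reading of "Attention-BN": it normalizes queries and keys across the sequence, as batch normalization would normalize its inputs, before computing attention; the paper's exact formulation may differ.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Standard scaled dot-product self-attention: each output row is a
    # softmax-weighted combination of the value vectors, mirroring a
    # kernel / support-vector expansion over the sequence.
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V

def attention_bn(Q, K, V, eps=1e-5):
    # Hypothetical Attention-BN sketch: standardize keys and queries
    # across the sequence dimension before attending.
    Kn = (K - K.mean(axis=0)) / np.sqrt(K.var(axis=0) + eps)
    Qn = (Q - Q.mean(axis=0)) / np.sqrt(Q.var(axis=0) + eps)
    return attention(Qn, Kn, V)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))       # 6 tokens, model dimension 4
out = attention(X, X, X)          # shape (6, 4)
out_bn = attention_bn(X, X, X)    # shape (6, 4)
```

Each row of the attention weight matrix sums to 1, so every output token is a convex combination of value vectors, which is the "expansion" structure the paper ties to support vector regression.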
  2. Protein-functionalized nanoparticles introduce a potentially novel drug delivery method for medical therapeutics, including involvement in cancer therapies and as contrast agents in imaging. Gold and silver nanoparticles are of particular interest due to their distinctive properties. Extensive research shows that gold nanoparticles demonstrate remarkable photothermal properties and non-toxic behavior, while silver nanoparticles exhibit antibacterial properties but increased toxicity for human use. However, little is known regarding the properties or applications of hybrid silver-gold particles. This study measured the UV-Vis absorbance spectra of 40 nm diameter Au, streptavidin-conjugated Au, Ag@Au hybrid, and Ag nanoparticles, as well as the transient absorbance spectrum of Au. Analysis indicates that the hybrid particles exhibit characteristics of both Ag and Au particles, implying potential applications similar to both Ag and Au nanoparticles.
  3. Non-aqueous organic redox flow batteries (NAORFBs) are considered emerging large-scale energy storage systems due to their larger voltage window as compared to aqueous systems and their metal-free nature. However, low solubility, sustainability, and crossover of redox materials remain major challenges for the development of NAORFBs. Here, we report the use of redox active α-helical polypeptides suitable for NAORFBs. The polypeptides exhibit less crossover than small molecule analogs for both the Daramic 175 separator and the FAPQ 375 PP membrane, with FAPQ 375 PP preventing crossover most effectively. Polypeptide NAORFBs assembled with a TEMPO-based polypeptide catholyte and a viologen-based polypeptide anolyte exhibit low capacity fade (ca. 0.1% per cycle over 500 cycles) and high coulombic efficiency (>99.5%). The polypeptide NAORFBs exhibit an output voltage of 1.1 V with a maximum capacity of 0.53 A h L−1 (39% of the theoretical capacity). After 500 charge–discharge cycles, 60% of the initial capacity was retained. Post-cycling analysis using spectral and electrochemical methods demonstrates that the polypeptide backbone and the ester side chain linkages are stable during electrochemical cycling. Taken together, these polypeptides offer naturally-derived, deconstructable platforms for addressing the needs of metal-free energy storage.
  4. We propose GRAph Neural Diffusion with a source term (GRAND++) for graph deep learning with a limited number of labeled nodes, i.e., low-labeling rate. GRAND++ is a class of continuous-depth graph deep learning architectures whose theoretical underpinning is the diffusion process on graphs with a source term. The source term guarantees two interesting theoretical properties of GRAND++: (i) the representation of graph nodes, under the dynamics of GRAND++, will not converge to a constant vector over all nodes even as the time goes to infinity, which mitigates the over-smoothing issue of graph neural networks and enables graph learning in very deep architectures. (ii) GRAND++ can provide accurate classification even when the model is trained with a very limited number of labeled training data. We experimentally verify the above two advantages on various graph deep learning benchmark tasks, showing a significant improvement over many existing graph neural networks. 
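The dynamics described above can be sketched numerically. The following is a minimal, hedged illustration of graph diffusion with a source term in the spirit of GRAND++, assuming a row-normalized adjacency as the diffusion operator and the initial features of a few "labeled" nodes as the source; the paper's actual architecture uses learned attention-based diffusion, so this is only a toy analog of property (i).

```python
import numpy as np

def diffusion_with_source(A, X0, source_idx, T=50, dt=0.1):
    # Euler integration of dX/dt = (A_hat - I) X + S, where A_hat is the
    # row-normalized adjacency and S re-injects the features of the
    # source (labeled) nodes at every step. Without S, the dynamics of a
    # connected graph converge to a constant vector over all nodes
    # (over-smoothing); the source term prevents that collapse.
    A_hat = A / A.sum(axis=1, keepdims=True)
    S = np.zeros_like(X0)
    S[source_idx] = X0[source_idx]
    X = X0.copy()
    for _ in range(T):
        X = X + dt * ((A_hat - np.eye(len(A))) @ X + S)
    return X

# Tiny example: a path graph on 4 nodes with self-loops, one labeled node.
A = np.array([[1., 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])
X0 = np.array([[1.], [0], [0], [0]])
X = diffusion_with_source(A, X0, source_idx=[0])
```

Running the same loop with `S` set to zero drives all node representations toward a common constant; with the source term they remain distinct, which is the over-smoothing mitigation the abstract highlights.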